Recognition of line graph images in documents by tracing connected components
Internal identifier: 000E03 (Main/Exploration); previous: 000E02; next: 000E04
Authors: Toshimichi Fuda [Japan]; Shinichiro Omachi [Japan]; Hirotomo Aso [Japan]
Source:
- Systems and Computers in Japan [ 0882-1666 ] ; 2007-12.
English descriptors
Abstract
To date there has been a great deal of research on the recognition and comprehension of document images. Most of this research has focused on text, although graph images within documents also carry a significant amount of information. If graph images can be recognized and understood, electronic documents can be used more efficiently. Previous research on recognition methods for graph images in documents has dealt with solid-line graph images (bar graphs and graphs with markers). This paper proposes a recognition method for line graph images without markers, composed of solid lines, dotted lines, broken lines, and dash-dot lines, and demonstrates its effectiveness through experiments. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(14): 103–114, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10615
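The core technique named in the title, tracing connected components in a binarized graph image, can be sketched generically. The following is an illustrative 8-connected component labeling over a binary pixel grid, not the authors' actual algorithm; the function name, the toy image, and the classification remark are all invented for this example.

```python
from collections import deque

def label_components(grid):
    """Label 8-connected components of foreground (1) pixels.

    Returns a dict mapping each component label to its set of pixels.
    A line-graph recognizer could then classify each component by its
    size and shape, e.g. distinguishing a solid stroke from the short
    dashes and dots of a dotted or dash-dot line.
    """
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    components = {}
    label = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1 and not seen[y][x]:
                # Start a new component and flood-fill it with BFS.
                label += 1
                pixels = set()
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.add((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and grid[ny][nx] == 1
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                components[label] = pixels
    return components

# Toy binary image: one short solid stroke and two isolated dots.
image = [
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]
comps = label_components(image)
print(len(comps))  # 3 components: the stroke and the two dots
```

Grouping nearby small components and fitting them to a line is the kind of tracing step a dashed-line recognizer would add on top of this labeling.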
DOI: 10.1002/scj.10615
Links toward previous steps (curation, corpus...)
- to stream Istex, to step Corpus: 002287
- to stream Istex, to step Curation: 002133
- to stream Istex, to step Checkpoint: 000818
- to stream Main, to step Merge: 000E16
- to stream Main, to step Curation: 000E03
The document in XML format
<record><TEI wicri:istexFullTextTei="biblStruct"><teiHeader><fileDesc><titleStmt><title xml:lang="en">Recognition of line graph images in documents by tracing connected components</title>
<author><name sortKey="Fuda, Toshimichi" sort="Fuda, Toshimichi" uniqKey="Fuda T" first="Toshimichi" last="Fuda">Toshimichi Fuda</name>
</author>
<author><name sortKey="Omachi, Shinichiro" sort="Omachi, Shinichiro" uniqKey="Omachi S" first="Shinichiro" last="Omachi">Shinichiro Omachi</name>
</author>
<author><name sortKey="Aso, Hirotomo" sort="Aso, Hirotomo" uniqKey="Aso H" first="Hirotomo" last="Aso">Hirotomo Aso</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:82EE0B702C2E1465C5B71DBEAE44D87F3A6AF8F0</idno>
<date when="2007" year="2007">2007</date>
<idno type="doi">10.1002/scj.10615</idno>
<idno type="url">https://api.istex.fr/document/82EE0B702C2E1465C5B71DBEAE44D87F3A6AF8F0/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">002287</idno>
<idno type="wicri:Area/Istex/Curation">002133</idno>
<idno type="wicri:Area/Istex/Checkpoint">000818</idno>
<idno type="wicri:doubleKey">0882-1666:2007:Fuda T:recognition:of:line</idno>
<idno type="wicri:Area/Main/Merge">000E16</idno>
<idno type="wicri:Area/Main/Curation">000E03</idno>
<idno type="wicri:Area/Main/Exploration">000E03</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title level="a" type="main" xml:lang="en">Recognition of line graph images in documents by tracing connected components</title>
<author><name sortKey="Fuda, Toshimichi" sort="Fuda, Toshimichi" uniqKey="Fuda T" first="Toshimichi" last="Fuda">Toshimichi Fuda</name>
<affiliation wicri:level="4"><country xml:lang="fr">Japon</country>
<wicri:regionArea>Graduate School of Engineering, Tohoku University, Sendai</wicri:regionArea>
<orgName type="university">Université du Tōhoku</orgName>
<placeName><settlement type="city">Sendai</settlement>
<region type="province">Région de Tōhoku</region>
</placeName>
</affiliation>
</author>
<author><name sortKey="Omachi, Shinichiro" sort="Omachi, Shinichiro" uniqKey="Omachi S" first="Shinichiro" last="Omachi">Shinichiro Omachi</name>
<affiliation wicri:level="4"><country xml:lang="fr">Japon</country>
<wicri:regionArea>Graduate School of Engineering, Tohoku University, Sendai</wicri:regionArea>
<orgName type="university">Université du Tōhoku</orgName>
<placeName><settlement type="city">Sendai</settlement>
<region type="province">Région de Tōhoku</region>
</placeName>
</affiliation>
</author>
<author><name sortKey="Aso, Hirotomo" sort="Aso, Hirotomo" uniqKey="Aso H" first="Hirotomo" last="Aso">Hirotomo Aso</name>
<affiliation wicri:level="4"><country xml:lang="fr">Japon</country>
<wicri:regionArea>Graduate School of Engineering, Tohoku University, Sendai</wicri:regionArea>
<orgName type="university">Université du Tōhoku</orgName>
<placeName><settlement type="city">Sendai</settlement>
<region type="province">Région de Tōhoku</region>
</placeName>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series><title level="j">Systems and Computers in Japan</title>
<title level="j" type="abbrev">Syst. Comp. Jpn.</title>
<idno type="ISSN">0882-1666</idno>
<idno type="eISSN">1520-684X</idno>
<imprint><publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<pubPlace>Hoboken</pubPlace>
<date type="published" when="2007-12">2007-12</date>
<biblScope unit="volume">38</biblScope>
<biblScope unit="issue">14</biblScope>
<biblScope unit="page" from="103">103</biblScope>
<biblScope unit="page" to="114">114</biblScope>
</imprint>
<idno type="ISSN">0882-1666</idno>
</series>
<idno type="istex">82EE0B702C2E1465C5B71DBEAE44D87F3A6AF8F0</idno>
<idno type="DOI">10.1002/scj.10615</idno>
<idno type="ArticleID">SCJ10615</idno>
</biblStruct>
</sourceDesc>
<seriesStmt><idno type="ISSN">0882-1666</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>OCR</term>
<term>connected components</term>
<term>document recognition</term>
<term>graph recognition</term>
</keywords>
</textClass>
<langUsage><language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">To date there has been a great deal of research on the recognition and comprehension of document images. Most of this research has focused on text, although graph images within documents also carry a significant amount of information. If graph images can be recognized and understood, electronic documents can be used more efficiently. Previous research on recognition methods for graph images in documents has dealt with solid-line graph images (bar graphs and graphs with markers). This paper proposes a recognition method for line graph images without markers, composed of solid lines, dotted lines, broken lines, and dash-dot lines, and demonstrates its effectiveness through experiments. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(14): 103–114, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10615</div>
</front>
</TEI>
<affiliations><list><country><li>Japon</li>
</country>
<region><li>Région de Tōhoku</li>
</region>
<settlement><li>Sendai</li>
</settlement>
<orgName><li>Université du Tōhoku</li>
</orgName>
</list>
<tree><country name="Japon"><region name="Région de Tōhoku"><name sortKey="Fuda, Toshimichi" sort="Fuda, Toshimichi" uniqKey="Fuda T" first="Toshimichi" last="Fuda">Toshimichi Fuda</name>
</region>
<name sortKey="Aso, Hirotomo" sort="Aso, Hirotomo" uniqKey="Aso H" first="Hirotomo" last="Aso">Hirotomo Aso</name>
<name sortKey="Omachi, Shinichiro" sort="Omachi, Shinichiro" uniqKey="Omachi S" first="Shinichiro" last="Omachi">Shinichiro Omachi</name>
</country>
</tree>
</affiliations>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000E03 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000E03 | SxmlIndent | more
To add a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= OcrV1 |flux= Main |étape= Exploration |type= RBID |clé= ISTEX:82EE0B702C2E1465C5B71DBEAE44D87F3A6AF8F0 |texte= Recognition of line graph images in documents by tracing connected components }}
This area was generated with Dilib version V0.6.32.